
    Attentional capture by entirely irrelevant distractors

    Studies of attentional capture often ask whether an irrelevant distractor will capture attention or be successfully ignored (e.g., Folk & Remington, 1998). Here we establish a new measure of attentional capture by distractors that are entirely irrelevant to the task in terms of visual appearance, meaning, and location (colourful cartoon figures presented in the periphery while subjects perform a central letter-search task). The presence of such a distractor significantly increased search RTs, suggesting that it captured attention despite its task-irrelevance. Such capture was found regardless of whether the search target was a singleton or not, for both frequent and infrequent distractors, and for both meaningful and meaningless distractor stimuli, although the cost was greater for infrequent and meaningful distractors. These results establish stimulus-driven capture by entirely irrelevant distractors and thus provide a demonstration of attentional capture that is more akin to distraction by irrelevant stimuli in daily life.
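    As a purely illustrative sketch (not code or data from the study), the capture measure described above boils down to the increase in mean search RT on distractor-present relative to distractor-absent trials; the simulated RTs, sample sizes, and test below are hypothetical:

        # Hypothetical illustration of the distractor-cost measure: attentional
        # capture is indexed by the rise in mean search RT when an irrelevant
        # peripheral distractor is present.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        rt_absent = rng.normal(620, 80, size=200)   # simulated RTs (ms), distractor absent
        rt_present = rng.normal(650, 80, size=200)  # simulated RTs (ms), distractor present

        capture_cost = rt_present.mean() - rt_absent.mean()  # RT cost in ms
        t, p = stats.ttest_ind(rt_present, rt_absent)
        print(f"distractor cost = {capture_cost:.1f} ms, t = {t:.2f}, p = {p:.3g}")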

    A Blind Search for Magnetospheric Emissions from Planetary Companions to Nearby Solar-type Stars

    This paper reports a blind search for magnetospheric emissions from planets around nearby stars. Young stars are likely to have much stronger stellar winds than the Sun, and because planetary magnetospheric emissions are powered by stellar winds, stronger stellar winds may enhance the radio luminosity of any orbiting planets. Using various stellar catalogs, we selected nearby stars (<~ 30 pc) with relatively young age estimates (< 3 Gyr). We constructed different samples from the stellar catalogs, finding between 100 and several hundred stars. We stacked images from the 74-MHz (4-m wavelength) VLA Low-frequency Sky Survey (VLSS), obtaining 3\sigma limits on planetary emission in the stacked images of between 10 and 33 mJy. These flux density limits correspond to average planetary luminosities less than 5--10 x 10^{23} erg/s. Using recent models for the scaling of stellar wind velocity, density, and magnetic field with stellar age, we estimate scaling factors for the strength of stellar winds, relative to the Sun, in our samples. The typical kinetic energy carried by the stellar winds in our samples is 15--50 times larger than that of the Sun, and the typical magnetic energy is 5--10 times larger. If we assume that every star is orbited by a Jupiter-like planet with a luminosity larger than that of the Jovian decametric radiation by the above factors, our limits on planetary luminosities from the stacking analysis are likely to be a factor of 10--100 above what would be required to detect the planets in a statistical sense. Similar statistical analyses with observations by future instruments, such as the Low Frequency Array (LOFAR) and the Long Wavelength Array (LWA), offer the promise of improvements by factors of 10--100.
    Comment: 11 pages; AASTeX; accepted for publication in A
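    As a rough back-of-the-envelope sketch of how a stacked flux density limit maps onto a luminosity limit (the distance, emission bandwidth, and isotropic beaming used here are illustrative assumptions, not the paper's adopted values): taking S_\nu = 33 mJy = 3.3\times10^{-25}\ \mathrm{erg\,s^{-1}\,cm^{-2}\,Hz^{-1}}, d = 30 pc \approx 9.3\times10^{19} cm, and an assumed emission bandwidth \Delta\nu \approx 10 MHz,

        L \approx 4\pi d^{2} S_\nu \Delta\nu
          \approx 4\pi (9.3\times10^{19}\ \mathrm{cm})^{2}
                  \times 3.3\times10^{-25}\ \mathrm{erg\,s^{-1}\,cm^{-2}\,Hz^{-1}}
                  \times 10^{7}\ \mathrm{Hz}
          \approx 4\times10^{23}\ \mathrm{erg\,s^{-1}},

    which is of the same order as the 5--10 x 10^{23} erg/s limits quoted above.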

    An analysis of the time course of attention in preview search.

    We used a probe dot procedure to examine the time course of attention in preview search (Watson and Humphreys, 1997). Participants searched for an outline red vertical bar among other new red horizontal bars and old green vertical bars, superimposed on a blue background grid. Following the reaction time response for search, the participants had to decide whether a probe dot had briefly been presented. Previews appeared for 1,000 msec and were immediately followed by search displays. In Experiment 1, we demonstrated a standard preview benefit relative to a conjunction search baseline. In Experiment 2, search was combined with the probe task. Probes were more difficult to detect when they were presented 1,200 msec, relative to 800 msec, after the preview, but at both intervals detection of probes at the locations of old distractors was harder than detection at the locations of new distractors or at neutral locations. Experiment 3A demonstrated that there was no difference in the detection of probes at old, neutral, and new locations when probe detection was the primary task, and there was also no difference when all of the shapes appeared simultaneously in conjunction search (Experiment 3B). In a final experiment (Experiment 4), we demonstrated that detection of probes on old items was facilitated (relative to probes at neutral locations and at the locations of new distractors) when the probes appeared 200 msec after previews, whereas detection on old items was worse when the probes followed 800 msec after previews. We discuss the results in terms of visual marking and attention capture processes in visual search.

    Size Matters: Large Objects Capture Attention in Visual Search

    Can objects or events ever capture one's attention in a purely stimulus-driven manner? A recent review of the literature set out the criteria required to demonstrate stimulus-driven attentional capture independent of goal-directed influences, and concluded that no published study had satisfied those criteria. Here, visual search experiments assessed whether an irrelevantly large object can capture attention. Capture of attention by this static visual feature was found. The results suggest that a large object can indeed capture attention in a stimulus-driven manner, independent of display-wide features of the task that might encourage a goal-directed bias for large items. It is concluded that these results are either consistent with the stimulus-driven criteria published previously or, alternatively, consistent with a flexible, goal-directed mechanism of saliency detection.

    Competition between auditory and visual spatial cues during visual task performance

    There is debate in the crossmodal cueing literature as to whether capture of visual attention by means of sound is a fully automatic process. Recent studies show that even when visual attention is endogenously focused, sound still captures attention. The current study investigated whether there is an interaction between exogenous auditory and visual capture. Participants performed an orthogonal cueing task in which the visual target was preceded by both a peripheral visual cue and an auditory cue. When both cues were non-predictive (presented at chance-level validity), both visual and auditory capture were observed. However, when the validity of the visual cue was increased to 80%, only visual capture and no auditory capture was observed. Furthermore, a highly predictive (80% valid) auditory cue was not able to prevent visual capture. These results demonstrate that crossmodal auditory capture does not occur when a competing predictive visual event is presented and is therefore not a fully automatic process.
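    A minimal, hypothetical sketch of the validity manipulation described above (the two cue/target sides, trial counts, and the 80%-vs-chance split are assumptions drawn from the abstract, not the authors' materials):

        # Hypothetical trial-list generator for an orthogonal cueing design:
        # the visual cue predicts the target side on 80% of trials, while the
        # auditory cue side is drawn at chance and is therefore non-predictive.
        import random

        random.seed(1)
        SIDES = ("left", "right")

        def make_trials(n_trials=200, visual_validity=0.80):
            trials = []
            for _ in range(n_trials):
                target = random.choice(SIDES)
                if random.random() < visual_validity:
                    visual_cue = target
                else:
                    visual_cue = "left" if target == "right" else "right"
                auditory_cue = random.choice(SIDES)  # chance level, non-predictive
                trials.append({"target": target,
                               "visual_cue": visual_cue,
                               "auditory_cue": auditory_cue})
            return trials

        trials = make_trials()
        observed = sum(t["visual_cue"] == t["target"] for t in trials) / len(trials)
        print(f"observed visual cue validity: {observed:.2f}")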